fix(resolution-search): resolve TypeError in webpack-runtime by refactoring utils and config (#494)
Conversation
Walkthrough
This PR extracts and reimplements the AI model resolver into `lib/utils/ai-model.ts`.
Changes
Sequence Diagram(s): `Agent->>Connector: request connection (on-demand transport)` (mermaid)
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
PR Code Suggestions ✨
Explore these optional code suggestions:
The refactor likely fixes the webpack runtime issue by preventing server-only AI SDK code from leaking into client bundles, but lib/utils/ai-model.ts introduces brittle env parsing and inconsistent behavior when a user-selected model is misconfigured. The try/catch fallback chain is probably ineffective because model factories usually don’t throw until request-time, and the Bedrock vision conditional is currently dead logic. Consider validating SPECIFIC_API_MODEL and making selected-model failures explicit instead of silently falling back.
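The request-time-failure point above can be illustrated with a small self-contained sketch. Everything here is hypothetical: `fakeProvider` stands in for `createXai`/`createOpenAI`-style factories and is not the AI SDK API.

```typescript
// Hypothetical stand-in for an AI SDK provider factory: creating a model
// handle only captures configuration, so nothing throws at creation time.
type Model = { doRequest: () => string };

function fakeProvider(apiKey?: string) {
  return (modelId: string): Model => ({
    doRequest: () => {
      // Errors (missing key, bad model ID) surface here, at request time.
      if (!apiKey) throw new Error(`auth error for ${modelId}`);
      return 'ok';
    },
  });
}

// Creation succeeds even with a missing key...
const model = fakeProvider(undefined)('grok-beta');
// ...so a try/catch wrapped around the creation call never triggers the
// fallback branch; the failure only appears later, when doRequest() runs.
```

This is why up-front validation of configuration is more effective than a catch-based fallback chain around model creation.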
Additional notes (1)
- Maintainability | lib/utils/index.ts:1-1
  `lib/utils/index.ts` remains a mixed bag of general utilities, and adding `uuid` here can still cause bundling issues depending on where it's imported. The intent of this PR is to prevent server-only code from leaking into client bundles; the same concern applies to keeping the "everything util barrel" pattern.
  Even though `getModel` was extracted, `lib/utils/index.ts` is still a common import target and may be pulled into client code unnecessarily (especially if you later re-add server-only helpers).
Summary of changes
What changed
- Split server-only model initialization out of `lib/utils/index.ts`
  - Removed the server-side `getModel()` implementation from `lib/utils/index.ts`.
  - Added a new module `lib/utils/ai-model.ts` exporting `getModel(requireVision?: boolean)`.
- Updated imports to avoid pulling server code into client bundles
  - Replaced barrel import usage in `app/actions.tsx` with direct imports from individual agent modules.
  - Updated multiple call sites to import `getModel` from `../utils/ai-model` / `@/lib/utils/ai-model`.
- Next.js build configuration
  - Removed `QCX` from `transpilePackages`, leaving only `mapbox_mcp`.
- Model selection behavior tweaks
  - Added support for a `process.env.SPECIFIC_API_MODEL` override.
  - Introduced a `requireVision` flag to select vision-capable model IDs for some providers.
```ts
export async function getModel(requireVision: boolean = false) {
  // Check for specific API model override
  if (process.env.SPECIFIC_API_MODEL) {
    const provider = process.env.SPECIFIC_API_MODEL.split(':')[0];
    const modelId = process.env.SPECIFIC_API_MODEL.split(':').slice(1).join(':');

    if (provider === 'openai') {
      return createOpenAI({ apiKey: process.env.OPENAI_API_KEY })(modelId);
    } else if (provider === 'google') {
      return createGoogleGenerativeAI({ apiKey: process.env.GEMINI_3_PRO_API_KEY })(modelId);
    } else if (provider === 'xai') {
      return createXai({ apiKey: process.env.XAI_API_KEY })(modelId);
    }
  }
```
SPECIFIC_API_MODEL parsing is performed multiple times (split(':') called twice), and it silently ignores unknown providers or malformed values. If the env var is misconfigured (e.g., missing :), modelId becomes '' and you’ll pass an empty model name into a provider factory—hard to diagnose in production.
Also, the override path bypasses requireVision handling entirely, which can produce surprising results when callers explicitly request vision support.
Suggestion
Centralize parsing/validation once, fail fast with a clear error, and optionally support a simple vision suffix or separate env var.
```ts
export async function getModel(requireVision = false) {
  const specific = process.env.SPECIFIC_API_MODEL;
  if (specific) {
    const idx = specific.indexOf(':');
    if (idx <= 0 || idx === specific.length - 1) {
      throw new Error(
        `Invalid SPECIFIC_API_MODEL format. Expected "provider:modelId", got "${specific}".`
      );
    }
    const provider = specific.slice(0, idx);
    const modelId = specific.slice(idx + 1);
    switch (provider) {
      case 'openai':
        return createOpenAI({ apiKey: process.env.OPENAI_API_KEY })(modelId);
      case 'google':
        return createGoogleGenerativeAI({ apiKey: process.env.GEMINI_3_PRO_API_KEY })(modelId);
      case 'xai':
        return createXai({ apiKey: process.env.XAI_API_KEY })(modelId);
      default:
        throw new Error(`Unsupported SPECIFIC_API_MODEL provider: ${provider}`);
    }
  }
  // ... existing selectedModel logic
}
```
```ts
const xaiApiKey = process.env.XAI_API_KEY;
const gemini3ProApiKey = process.env.GEMINI_3_PRO_API_KEY;
const awsAccessKeyId = process.env.AWS_ACCESS_KEY_ID;
const awsSecretAccessKey = process.env.AWS_SECRET_ACCESS_KEY;
const awsRegion = process.env.AWS_REGION;
const bedrockModelId = process.env.BEDROCK_MODEL_ID || (requireVision ? 'anthropic.claude-3-5-sonnet-20241022-v2:0' : 'anthropic.claude-3-5-sonnet-20241022-v2:0');
const openaiApiKey = process.env.OPENAI_API_KEY;
```
`bedrockModelId` uses a conditional with identical branches:
```ts
requireVision ? 'anthropic.claude-3-5-sonnet-20241022-v2:0' : 'anthropic.claude-3-5-sonnet-20241022-v2:0'
```
This is dead logic and makes it look like vision selection is implemented when it isn't.
Suggestion
Either remove requireVision from bedrockModelId or implement a real mapping (if you have distinct Bedrock model IDs for vision vs non-vision).
```ts
const bedrockModelId =
  process.env.BEDROCK_MODEL_ID ?? 'anthropic.claude-3-5-sonnet-20241022-v2:0';
```
(or)
```ts
const bedrockModelId = process.env.BEDROCK_MODEL_ID ??
  (requireVision
    ? 'anthropic.claude-3-5-sonnet-20241022-v2:0' // vision-capable
    : 'anthropic.claude-3-5-haiku-20241022-v1:0');
```
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@lib/utils/ai-model.ts`:
- Around line 99-103: The code currently calls createOpenAI({ apiKey:
openaiApiKey }) and returns openai('gpt-4o') even when OPENAI_API_KEY
(openaiApiKey) is undefined; add an explicit guard in the function that builds
the provider client (check the openaiApiKey variable /
process.env.OPENAI_API_KEY) and throw a clear configuration Error (e.g., "No
OpenAI API key configured; set OPENAI_API_KEY or provide another provider key")
before attempting to instantiate createOpenAI or call openai('gpt-4o'); update
any surrounding logic that falls back to OpenAI (the code around createOpenAI
and the return of openai('gpt-4o')) to only attempt instantiation when the key
exists.
- Around line 33-58: The selectedModel switch returns incorrect model IDs and
misorders fallbacks for structured-output: update the 'Grok 4.2' case in the
switch (where createXai(...) is used) to return 'grok-4-latest' (or vision
variant when requireVision), change the 'Gemini 3' case
(createGoogleGenerativeAI(...)) to return 'gemini-3-pro-preview' for Gemini 3
Pro, and change the 'GPT-5.1' case (createOpenAI(...)) to return the actual
GPT-5.1 model ID instead of 'gpt-4o'; also revise the function’s fallback
ordering so OpenAI (createOpenAI / gpt-5.1 or gpt-4o for compatibility) is
preferred for structured-output flows (generateObject/streamObject) because
xAI/grok lacks reliable structured-output support, ensuring vision variants
still honor requireVision where applicable.
- Around line 62-84: Add a new boolean parameter requireStructuredOutput to
getModel(requireVision?: boolean, requireStructuredOutput?: boolean) and, at the
top of the provider-selection logic in getModel, short-circuit to the OpenAI
provider (e.g., return openai('gpt-4o' or whatever OpenAI identifier is used in
this file) when requireStructuredOutput is true so xAI
(grok-beta/grok-vision-beta) is not chosen; update the provider-selection
branches to treat grok models as deprecated and only used when
requireStructuredOutput is false, and propagate this new flag from the
structured-output call sites by updating resolution-search.tsx,
task-manager.tsx, query-suggestor.tsx, suggest.ts, and inquire.tsx to pass
requireStructuredOutput:true when calling getModel or its wrappers.
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (12)
- app/actions.tsx
- lib/actions/suggest.ts
- lib/agents/inquire.tsx
- lib/agents/query-suggestor.tsx
- lib/agents/researcher.tsx
- lib/agents/resolution-search.tsx
- lib/agents/task-manager.tsx
- lib/agents/writer.tsx
- lib/utils/ai-model.ts
- lib/utils/index.ts
- mapbox_mcp/hooks.ts
- next.config.mjs
💤 Files with no reviewable changes (1)
- lib/utils/index.ts
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2026-01-13T13:26:30.086Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-13T13:26:30.086Z
Learning: Workers that rely on ai.generateObject (geojsonParser, map-command-generator, feedback-analyzer) must not use the X.AI 'grok-4-fast-non-reasoning' model because X.AI’s chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs.
Applied to files:
lib/actions/suggest.ts, lib/utils/ai-model.ts, lib/agents/resolution-search.tsx
📚 Learning: 2026-01-17T06:14:51.070Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-17T06:14:51.070Z
Learning: For structured output tasks using ai.generateObject (resolution-search, geojsonParser, map-command-generator, feedback-analyzer), prefer OpenAI gpt-4o. While xAI's grok-4-1-fast-reasoning technically supports structured outputs and vision, OpenAI has proven more reliable integration with the AI SDK's generateObject function and Zod schemas in production.
Applied to files:
lib/utils/ai-model.ts
🧬 Code graph analysis (1)
lib/utils/ai-model.ts (1)
lib/actions/users.ts (1)
getSelectedModel(126-139)
🔇 Additional comments (10)

next.config.mjs (1)
- 12-12: LGTM – transpilePackages narrowed cleanly. Line 12 keeps transpilation scoped to the external package only; no concerns.

lib/actions/suggest.ts (1)
- 6-6: LGTM – import path updated. Line 6 aligns with the new ai-model module.

lib/agents/researcher.tsx (1)
- 13-13: LGTM – import path updated. Line 13 matches the new utility module location.

lib/agents/resolution-search.tsx (1)
- 2-2: LGTM – import path updated. Line 2 is consistent with the new ai-model module.

mapbox_mcp/hooks.ts (1)
- 3-3: LGTM – import path updated. Line 3 aligns with the refactor to `ai-model`.

lib/agents/query-suggestor.tsx (1)
- 6-6: LGTM – import path updated. Line 6 matches the new ai-model utility.

lib/agents/task-manager.tsx (1)
- 3-3: LGTM: import path aligns with ai-model refactor.

lib/agents/inquire.tsx (1)
- 5-5: LGTM: import source updated for getModel.

app/actions.tsx (1)
- 15-19: LGTM: granular agent imports look clean.

lib/agents/writer.tsx (1)
- 5-12: No action required; all writer() call sites have already been updated. The only call site in the codebase (app/actions.tsx:476) correctly passes `dynamicSystemPrompt` as the first argument, with all other arguments properly aligned to the updated function signature. No broken calls or missing parameters exist.
```ts
switch (selectedModel) {
  case 'Grok 4.2':
    if (xaiApiKey) {
      const xai = createXai({
        apiKey: xaiApiKey,
        baseURL: 'https://api.x.ai/v1',
      });
      return xai(requireVision ? 'grok-vision-beta' : 'grok-beta');
    }
    break;
  case 'Gemini 3':
    if (gemini3ProApiKey) {
      const google = createGoogleGenerativeAI({
        apiKey: gemini3ProApiKey,
      });
      return google(requireVision ? 'gemini-1.5-pro' : 'gemini-1.5-pro');
    }
    break;
  case 'GPT-5.1':
    if (openaiApiKey) {
      const openai = createOpenAI({
        apiKey: openaiApiKey,
      });
      return openai('gpt-4o');
    }
    break;
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n lib/utils/ai-model.ts
```
(Repository: QueueLab/QCX; length of output: 4119)
🌐 Web query:
xAI Grok current model ID 2026 ai-sdk grok-beta grok-vision-beta
💡 Result:
As of February 2026, the Vercel AI SDK xAI provider (@ai-sdk/xai) accepts these current Grok model IDs (you pass them as strings to xai(modelId) or xai.responses(modelId)):
- Still valid (the ones you mentioned): `grok-beta` (text), `grok-vision-beta` (vision) (ai-sdk.dev)
- Recommended "current"/maintained IDs (use `*-latest` when you want the newest without changing code):
  - Text/chat: `grok-4-latest`, `grok-3-latest`, `grok-2-latest` (ai-sdk.dev)
  - Vision: `grok-2-vision-latest` (or pinned `grok-2-vision-1212`) (ai-sdk.dev)
Example:
```ts
import { xai } from "@ai-sdk/xai";

// text
const model = xai("grok-4-latest");

// vision (for image input in AI SDK docs)
const visionModel = xai.responses("grok-2-vision-latest");
```
([ai-sdk.dev](https://ai-sdk.dev/providers/ai-sdk-providers/xai))
Citations: https://ai-sdk.dev/providers/ai-sdk-providers/xai
---
🌐 Web query:
`Google Gemini 3 Pro current model ID 2026 ai-sdk gemini-1.5-pro`
💡 Result:
As of **February 4, 2026**, the **Gemini API** “Gemini 3 Pro” model ID you’d pass to an AI SDK is:
- **`gemini-3-pro-preview`** (Gemini 3 Pro Preview) [1][2]
(For the image-capable variant: **`gemini-3-pro-image-preview`**.) [1][2]
**Sources:** [1] [2]
---
🌐 Web query:
`OpenAI GPT-5.1 model ID 2026 gpt-4o current version`
💡 Result:
- **GPT‑5.1 (API) model ID (alias):** `gpt-5.1`
**Snapshot (versioned model ID):** `gpt-5.1-2025-11-13` (this is the only listed snapshot as of Feb 4, 2026). [1]
- **GPT‑4o (API) current model ID (alias):** `gpt-4o`
**Latest listed snapshot (versioned model ID):** `gpt-4o-2024-11-20` (newest date among the snapshots shown). [2]
**Sources:**
[1] https://platform.openai.com/docs/models/gpt-5.1/
[2] https://platform.openai.com/docs/models/gpt-4o
---
**Fix model ID mismatches in the user-facing labels and address structured-output safety.**
The labels "Grok 4.2", "Gemini 3", and "GPT-5.1" do not match the actual model IDs returned:
- **"Gemini 3"** (line 43) returns `gemini-1.5-pro`, which is outdated. Use `gemini-3-pro-preview` for the current Gemini 3 Pro model.
- **"GPT-5.1"** (line 51) returns `gpt-4o` instead of `gpt-5.1`. Users selecting "GPT-5.1" should receive the GPT-5.1 model, not GPT-4o.
- **"Grok 4.2"** (line 34) returns `grok-beta`, a generic beta version. Consider using `grok-4-latest` for consistency.
Additionally, the default fallback order (lines 62–84) prioritizes xAI (Grok) first. Per established guidance, structured-output flows using `generateObject` and `streamObject` must use OpenAI (`gpt-4o`), not xAI models which lack reliable structured-output support. Review whether this function is used in structured-output contexts and adjust the fallback order accordingly.
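One way to make this label/ID drift visible is a single lookup table. A minimal sketch follows; the model IDs are taken from the web-query results above and assumed current, not verified against the providers:

```typescript
// Hypothetical label -> model-ID table; keeping the mapping in one place
// makes a mismatch like 'GPT-5.1' resolving to 'gpt-4o' obvious at review time.
const MODEL_TABLE: Record<string, { text: string; vision: string }> = {
  'Grok 4.2': { text: 'grok-4-latest', vision: 'grok-2-vision-latest' },
  'Gemini 3': { text: 'gemini-3-pro-preview', vision: 'gemini-3-pro-preview' },
  'GPT-5.1': { text: 'gpt-5.1', vision: 'gpt-5.1' },
};

function resolveModelId(label: string, requireVision = false): string {
  const entry = MODEL_TABLE[label];
  // Fail fast on unknown labels instead of silently falling back.
  if (!entry) throw new Error(`Unknown model label: ${label}`);
  return requireVision ? entry.vision : entry.text;
}
```

A table like this also gives the fallback chain one authoritative place to update when providers deprecate model IDs.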
<details>
<summary>🤖 Prompt for AI Agents</summary>

In @lib/utils/ai-model.ts around lines 33-58, the selectedModel switch returns
incorrect model IDs and misorders fallbacks for structured-output: update the
'Grok 4.2' case in the switch (where createXai(...) is used) to return
'grok-4-latest' (or vision variant when requireVision), change the 'Gemini 3'
case (createGoogleGenerativeAI(...)) to return 'gemini-3-pro-preview' for Gemini
3 Pro, and change the 'GPT-5.1' case (createOpenAI(...)) to return the actual
GPT-5.1 model ID instead of 'gpt-4o'; also revise the function’s fallback
ordering so OpenAI (createOpenAI / gpt-5.1 or gpt-4o for compatibility) is
preferred for structured-output flows (generateObject/streamObject) because
xAI/grok lacks reliable structured-output support, ensuring vision variants
still honor requireVision where applicable.
</details>
```ts
// Default behavior: Grok -> Gemini -> Bedrock -> OpenAI
if (xaiApiKey) {
  const xai = createXai({
    apiKey: xaiApiKey,
    baseURL: 'https://api.x.ai/v1',
  });
  try {
    return xai(requireVision ? 'grok-vision-beta' : 'grok-beta');
  } catch (error) {
    console.warn('xAI API unavailable, falling back to next provider');
  }
}

if (gemini3ProApiKey) {
  const google = createGoogleGenerativeAI({
    apiKey: gemini3ProApiKey,
  });
  try {
    return google(requireVision ? 'gemini-1.5-pro' : 'gemini-1.5-pro');
  } catch (error) {
    console.warn('Gemini 3 Pro API unavailable, falling back to next provider:', error);
  }
}
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, locate and examine the ai-model.ts file
find . -name "ai-model.ts" -type f
```
(Repository: QueueLab/QCX; length of output: 79)
🏁 Script executed:
```shell
# Search for calls to getModel to see if structured outputs are needed
rg "getModel\(" --context=3
```
(Repository: QueueLab/QCX; length of output: 4936)
🏁 Script executed:
```shell
# Search for streamObject and generateObject usage
rg "streamObject|generateObject" --context=2
```
(Repository: QueueLab/QCX; length of output: 3583)
🏁 Script executed:
```shell
cat -n lib/utils/ai-model.ts
```
(Repository: QueueLab/QCX; length of output: 4119)
🌐 Web query:
xAI grok-vision-beta grok-beta structured output JSON schema support
💡 Result:
Yes—both grok-beta and grok-vision-beta support “Structured Outputs” with a JSON Schema via the response_format mechanism (i.e., OpenAI-style response_format: { type: "json_schema", json_schema: ... }). xAI’s Structured Outputs guide states the feature is supported by all language models, and the API will guarantee the response matches your schema. [1]
Notes / constraints from xAI's docs:
- Supported schema constructs include: `string`, `number` (integer/float), `object`, `array`, `boolean`, `enum`, `anyOf`; `allOf` is not supported (and some array/string length constraints aren't supported). [1]
- `grok-beta` / `grok-vision-beta` are older/preview models and are now deprecated in xAI's model lineup (xAI recommends the `grok-2-1212` family instead). [2][3]
🌐 Web query:
grok-4-fast-non-reasoning vs grok-beta vs grok-vision-beta structured outputs AI SDK
💡 Result:
Model comparison (xAI API)

| Model | What it is | Vision (image-in) | Status | Structured outputs | Structured outputs + tools |
|---|---|---|---|---|---|
| grok-4-1-fast-non-reasoning | Grok 4.1 Fast variant optimized for instant responses (skips "thinking" phase). | Yes (multimodal) | Current | Yes (xAI says structured outputs work for all language models) | Yes, but only for Grok 4 family models (includes grok-4-1-fast-non-reasoning). [1][2] |
| grok-beta | Original public-beta Grok text model (128k context at launch). | No | Deprecated | Yes (as a language model) | Not called out as supported (Grok 4 family only). [3][1] |
| grok-vision-beta | Original public-beta Grok vision model. | Yes | Deprecated | Yes (as a language model) | Not called out as supported (Grok 4 family only). [3][1] |
Using Structured Outputs from the Vercel AI SDK
- Vercel AI SDK's "structured outputs" feature is implemented via the OpenAI provider option `structuredOutputs: true` (i.e., OpenAI-style JSON-schema structured outputs). [4]
- However, "structured outputs with tools" in the AI SDK is currently only available with OpenAI models (per AI SDK 4.1 notes). [5]
- xAI recommends using the Vercel AI SDK for JavaScript, and their OpenAI-compatible base URL is `https://api.x.ai/v1`. [6]

Practical implication:
- If you just need JSON/object output from Grok in an AI SDK app, try AI SDK `generateObject` with `structuredOutputs: true` against `https://api.x.ai/v1` (OpenAI-compatible). [4][6]
- If you need Grok tool-calling plus guaranteed structured output, xAI documents that capability as supported for Grok 4 family models; use xAI's structured-output flow/tooling (per their docs/SDK) rather than relying on AI SDK's "structured outputs with tools" feature. [1]
Sources:
[1] xAI Structured Outputs guide (supported models; Grok 4 family requirement for structured outputs + tools) (docs.x.ai)
[2] xAI Grok 4.1 Fast announcement (model variants) (x.ai)
[3] xAI Models & Pricing (grok-beta and grok-vision-beta deprecated) (grok.cadn.net.cn)
[4] AI SDK OpenAI provider guide (how structuredOutputs: true is used with generateObject) (ai-sdk.dev)
[5] Vercel AI SDK 4.1 blog (structured outputs with tools currently only for OpenAI models) (vercel.com)
[6] xAI Migration guide (recommends Vercel AI SDK; base URL https://api.x.ai/v1) (docs.x.ai)
Citations:
- 1: https://docs.x.ai/docs/guides/structured-outputs
- 2: https://x.ai/news/grok-4-1-fast/?utm_source=openai
- 3: https://grok.cadn.net.cn/docs/models.html?utm_source=openai
- 4: https://ai-sdk.dev/docs/guides/providers/openai-functions?utm_source=openai
- 5: https://vercel.com/blog/ai-sdk-4-1?utm_source=openai
- 6: https://docs.x.ai/docs/guides/migration?utm_source=openai
Add requireStructuredOutput flag to getModel() to bypass xAI for structured output tasks.
Lines 62–84 prioritize xAI (grok-beta/grok-vision-beta) when available, but multiple call sites use streamObject and generateObject for structured outputs. The Vercel AI SDK's structured output integration with xAI is unreliable; in production, structured output tasks should route to OpenAI gpt-4o. Additionally, grok-beta and grok-vision-beta are deprecated models.
Add a requireStructuredOutput parameter to getModel() and short-circuit to OpenAI when true. Update structured output call sites to pass this flag: resolution-search.tsx, task-manager.tsx, query-suggestor.tsx, suggest.ts, and inquire.tsx.
🔧 Suggested safeguard for structured output
```diff
-export async function getModel(requireVision: boolean = false) {
+export async function getModel(
+  requireVision: boolean = false,
+  requireStructuredOutput: boolean = false
+) {
   // Check for specific API model override
   if (process.env.SPECIFIC_API_MODEL) {
     const provider = process.env.SPECIFIC_API_MODEL.split(':')[0];
     const modelId = process.env.SPECIFIC_API_MODEL.split(':').slice(1).join(':');
@@
-  const openaiApiKey = process.env.OPENAI_API_KEY;
+  const openaiApiKey = process.env.OPENAI_API_KEY;
+
+  if (requireStructuredOutput && openaiApiKey) {
+    const openai = createOpenAI({ apiKey: openaiApiKey });
+    return openai('gpt-4o');
+  }
```
🤖 Prompt for AI Agents
In `@lib/utils/ai-model.ts` around lines 62 - 84, Add a new boolean parameter
requireStructuredOutput to getModel(requireVision?: boolean,
requireStructuredOutput?: boolean) and, at the top of the provider-selection
logic in getModel, short-circuit to the OpenAI provider (e.g., return
openai('gpt-4o' or whatever OpenAI identifier is used in this file) when
requireStructuredOutput is true so xAI (grok-beta/grok-vision-beta) is not
chosen; update the provider-selection branches to treat grok models as
deprecated and only used when requireStructuredOutput is false, and propagate
this new flag from the structured-output call sites by updating
resolution-search.tsx, task-manager.tsx, query-suggestor.tsx, suggest.ts, and
inquire.tsx to pass requireStructuredOutput:true when calling getModel or its
wrappers.
```ts
const openai = createOpenAI({
  apiKey: openaiApiKey,
});
return openai('gpt-4o');
}
```
🧹 Nitpick | 🔵 Trivial
Guard against missing provider keys before OpenAI fallback.
Line 99-103 instantiates OpenAI even if OPENAI_API_KEY is undefined, which leads to a late 401/invalid-auth error. Add a clear configuration error when no provider keys are available.
🛡️ Explicit guard for missing keys
```diff
-  const openai = createOpenAI({
-    apiKey: openaiApiKey,
-  });
-  return openai('gpt-4o');
+  if (!openaiApiKey) {
+    throw new Error('No AI provider API key configured');
+  }
+  const openai = createOpenAI({ apiKey: openaiApiKey });
+  return openai('gpt-4o');
```
🤖 Prompt for AI Agents
In `@lib/utils/ai-model.ts` around lines 99 - 103, The code currently calls
createOpenAI({ apiKey: openaiApiKey }) and returns openai('gpt-4o') even when
OPENAI_API_KEY (openaiApiKey) is undefined; add an explicit guard in the
function that builds the provider client (check the openaiApiKey variable /
process.env.OPENAI_API_KEY) and throw a clear configuration Error (e.g., "No
OpenAI API key configured; set OPENAI_API_KEY or provide another provider key")
before attempting to instantiate createOpenAI or call openai('gpt-4o'); update
any surrounding logic that falls back to OpenAI (the code around createOpenAI
and the return of openai('gpt-4o')) to only attempt instantiation when the key
exists.
…s and using dynamic imports

This second iteration addresses the persistent "Cannot read properties of undefined (reading 'call')" Webpack error in the Vercel production environment. Key refinements:
- Refactored `lib/utils/ai-model.ts` to use dynamic `import()` for all AI provider SDKs. This ensures that these modules are not required during initial chunk loading, which is the most common cause of module-factory-undefined errors in Next.js.
- Decoupled `DrawnFeature` and other geospatial types into a dedicated `lib/types/geospatial.ts` to break circular dependency chains between agents.
- Removed `transpilePackages` from `next.config.mjs` for the local folder, as it was likely causing Webpack to mismanage module IDs.
- Standardized imports across agents and server actions to avoid barrel file pitfalls.

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
…duction

This commit implements a comprehensive set of architectural fixes to eliminate the persistent "Cannot read properties of undefined (reading 'call')" error in the Vercel production environment. Key changes:
- Added 'use server' to all agent files in lib/agents/ to ensure strict server-side execution and bundle isolation.
- Implemented dynamic import() for @modelcontextprotocol/sdk and other external dependencies in geospatial.tsx and ai-model.ts to prevent initialization failures during chunk loading.
- Removed the lib/agents/index.tsx barrel file to eliminate potential circular dependency paths and massive module graphs.
- Decoupled shared types into lib/types/geospatial.ts to break dependency cycles.
- Standardized all internal imports to use direct paths.
- Verified local build success with "bun run build".

Co-authored-by: ngoiyaeric <115367894+ngoiyaeric@users.noreply.github.com>
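The dynamic-import pattern the commit describes can be sketched as follows. This is a sketch only: `node:path` stands in for a provider SDK such as `@ai-sdk/xai`, so the example stays runnable without the real packages.

```typescript
// The stand-in SDK module is loaded on first call rather than at
// module-evaluation time, keeping it out of the initial webpack chunk graph.
export async function getModelLazy(): Promise<string> {
  const path = await import('node:path'); // deferred load, provider-SDK stand-in
  return path.join('provider', 'model-id');
}
```

Because the `import()` only runs inside the function body, bundlers place the module in a separate chunk that is fetched on demand, which is why it avoids the module-factory-undefined failure during initial chunk loading.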
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/agents/task-manager.tsx (1)
- 1-22: ⚠️ Potential issue | 🔴 Critical
  `getModel()` will return `grok-beta`, which rejects JSON Schema structured outputs, causing `generateObject` to fail at runtime in `taskManager`.
  The default model selection (lines 67-74 in `lib/utils/ai-model.ts`) prioritizes X.AI's `grok-beta` when `XAI_API_KEY` is available. Since `taskManager` calls `generateObject()` with `nextActionSchema` for structured output, this creates a runtime failure. X.AI's grok models do not support JSON Schema structured outputs in their chat/completions API.
  Ensure `taskManager` uses a compatible model for structured output. Either modify `getModel()` to skip grok for `generateObject` tasks, or explicitly select `gpt-4o` or Bedrock (Claude 3.5), which reliably support JSON Schema.
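The routing fix this comment asks for can be sketched with stubbed providers. Everything below is illustrative: the keys and the returned `provider:model` strings are stand-ins, not the real `getModel` implementation.

```typescript
// Short-circuit to OpenAI whenever structured output is required, since the
// review flags grok models as unreliable with generateObject / JSON Schema.
function pickProvider(
  opts: { requireVision?: boolean; requireStructuredOutput?: boolean },
  keys: { openai?: string; xai?: string },
): string {
  if (opts.requireStructuredOutput && keys.openai) return 'openai:gpt-4o';
  if (keys.xai) {
    return opts.requireVision ? 'xai:grok-2-vision-latest' : 'xai:grok-4-latest';
  }
  if (keys.openai) return 'openai:gpt-4o';
  throw new Error('No AI provider API key configured');
}
```

Call sites that use `generateObject` (task-manager, query-suggestor, inquire, suggest, resolution-search) would then pass `requireStructuredOutput: true`, while free-form chat paths keep the existing Grok-first fallback order.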
🤖 Fix all issues with AI agents
In `@lib/agents/tools/geospatial.tsx`:
- Around line 23-66: Change the loose any types to the MCP client and uiStream
concrete types: import the Client and StreamableHTTPClientTransport types (as
type-only imports if needed) from the MCP SDK and update getConnectedMcpClient
to return Promise<Client | null> (and use Client instead of any for the local
client variable), change closeClient signature to accept client: Client | null,
and update geospatialTool's parameter type from { uiStream: any, mapProvider?:
MapProvider } to use the proper uiStream type (and include the MapProvider type
if missing); verify the MCP SDK type entrypoint (e.g. the SDK's index.d.ts)
supports type-only imports so these changes are type-only and do not alter
runtime behavior.
In `@lib/types/geospatial.ts`:
- Around line 1-6: The DrawnFeature interface uses geometry: any which leaks
unsafe typing; replace it with the appropriate GeoJSON geometry union (e.g.,
GeoJSON.Geometry or specific types like GeoJSON.Polygon | GeoJSON.LineString) by
importing the GeoJSON types from the geojson package and updating the geometry
property on DrawnFeature; before changing, verify the geojson package is
installed and available in package.json/lock and add it if missing, then update
imports/usages referencing DrawnFeature to match the tightened type.
In `@lib/utils/ai-model.ts`:
- Around line 92-103: The Bedrock initialization currently checks only
awsAccessKeyId and awsSecretAccessKey; update the guard to require awsRegion as
well and wrap the import/creation in a try-catch so it gracefully falls back on
failure. Specifically, modify the conditional that gates
createAmazonBedrock(...) to include awsRegion and add error handling around the
dynamic import and bedrock(bedrockModelId) call (references:
createAmazonBedrock, bedrock, bedrockModelId, awsAccessKeyId,
awsSecretAccessKey, awsRegion) so failures are logged/handled and do not crash
when region is missing or initialization fails.
- Around line 7-20: The SPECIFIC_API_MODEL override path currently constructs
provider clients without validating API keys; before calling
createOpenAI/createGoogleGenerativeAI/createXai, add explicit checks that the
corresponding env vars (OPENAI_API_KEY for provider 'openai',
GEMINI_3_PRO_API_KEY for 'google', XAI_API_KEY for 'xai') are present and
non-empty, and if missing throw or return a clear error (or log and exit)
indicating the missing key and referencing SPECIFIC_API_MODEL so auth failures
are descriptive.
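The `provider:modelId` parsing those prompts refer to can be exercised on its own; `parseSpecificApiModel` is a hypothetical helper illustrating why the `split(':').slice(1).join(':')` step matters for model ids that themselves contain colons:

```typescript
// Hypothetical helper mirroring the SPECIFIC_API_MODEL parsing in
// lib/utils/ai-model.ts: everything before the first ':' is the provider,
// everything after it (colons included) is the model id.
function parseSpecificApiModel(value: string): { provider: string; modelId: string } {
  const [provider, ...rest] = value.split(':');
  return { provider, modelId: rest.join(':') };
}
```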
📜 Review details
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (12)
- app/actions.tsx
- lib/agents/index.tsx
- lib/agents/inquire.tsx
- lib/agents/query-suggestor.tsx
- lib/agents/researcher.tsx
- lib/agents/resolution-search.tsx
- lib/agents/task-manager.tsx
- lib/agents/tools/geospatial.tsx
- lib/agents/writer.tsx
- lib/types/geospatial.ts
- lib/utils/ai-model.ts
- next.config.mjs
💤 Files with no reviewable changes (1)
- lib/agents/index.tsx
🧰 Additional context used
🧠 Learnings (4)
📓 Common learnings
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-16T10:02:40.220Z
Learning: The router agent and resolution search agent must implement a feedback loop where tool execution results (including GeoTIFF URLs, embeddings, and satellite analysis data) are fed back to the AI model's context through enriched messages before generating the final response, ensuring the foundational model receives and can synthesize all satellite intelligence data.
📚 Learning: 2026-01-13T13:26:30.086Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-13T13:26:30.086Z
Learning: Workers that rely on ai.generateObject (geojsonParser, map-command-generator, feedback-analyzer) must not use the X.AI 'grok-4-fast-non-reasoning' model because X.AI’s chat/completions rejects JSON Schema structured-output arguments; prefer OpenAI gpt-4o for structured outputs.
Applied to files:
- lib/utils/ai-model.ts
- lib/agents/resolution-search.tsx
- lib/agents/tools/geospatial.tsx
📚 Learning: 2026-01-17T06:14:51.070Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-17T06:14:51.070Z
Learning: For structured output tasks using ai.generateObject (resolution-search, geojsonParser, map-command-generator, feedback-analyzer), prefer OpenAI gpt-4o. While xAI's grok-4-1-fast-reasoning technically supports structured outputs and vision, OpenAI has proven more reliable integration with the AI SDK's generateObject function and Zod schemas in production.
Applied to files:
- lib/utils/ai-model.ts
- lib/agents/resolution-search.tsx
📚 Learning: 2026-01-16T10:02:40.220Z
Learnt from: ngoiyaeric
Repo: QueueLab/QCX PR: 0
File: :0-0
Timestamp: 2026-01-16T10:02:40.220Z
Learning: The router agent and resolution search agent must implement a feedback loop where tool execution results (including GeoTIFF URLs, embeddings, and satellite analysis data) are fed back to the AI model's context through enriched messages before generating the final response, ensuring the foundational model receives and can synthesize all satellite intelligence data.
Applied to files:
lib/agents/resolution-search.tsx
🧬 Code graph analysis (2)
lib/utils/ai-model.ts (1)
- lib/actions/users.ts (1): `getSelectedModel` (126-139)

lib/agents/tools/geospatial.tsx (3)
- lib/schema/geospatial.tsx (1): `geospatialQuerySchema` (13-121)
- components/message.tsx (1): `BotMessage` (11-31)
- lib/actions/users.ts (1): `getSelectedModel` (126-139)
🔇 Additional comments (12)
app/actions.tsx (1)
- 15-20: LGTM — explicit agent imports and centralized `DrawnFeature` type.

next.config.mjs (1)
- 12-12: LGTM — config cleanup is clear and isolated.

lib/agents/tools/geospatial.tsx (2)
- 12-18: LGTM — centralized geospatial types and Google static map helper.
- 84-228: LGTM — clearer status updates and more robust response parsing.

lib/agents/query-suggestor.tsx (1)
- 1-8: LGTM — server directive and ai-model import update.

lib/agents/inquire.tsx (1)
- 1-7: LGTM — server directive and ai-model import update.

lib/agents/researcher.tsx (2)
- 1-1: Server-only boundary looks right. Keeps model/tool execution on the server where it belongs.
- 15-17: Import refactor looks good. Centralizing `getModel` and sharing `DrawnFeature` via the geospatial types is a clean split.

lib/agents/writer.tsx (2)
- 1-7: Server-only directive and model import update look good.
- 9-14: All `writer()` call sites have been correctly updated with the new `dynamicSystemPrompt` parameter. The single call site at app/actions.tsx:477 passes all parameters in the correct order: `currentSystemPrompt` (as the new first parameter), followed by `uiStream`, `streamText`, and `latestMessages`. No runtime argument shifting issues exist.

lib/agents/resolution-search.tsx (2)
- 1-1: Server-only directive is appropriate here.
- 4-6: Import updates look good. Using lib/utils/ai-model and shared geospatial types keeps the module boundaries clean.
async function getConnectedMcpClient(): Promise<any | null> {
  const composioApiKey = process.env.COMPOSIO_API_KEY;
  const mapboxAccessToken = process.env.MAPBOX_ACCESS_TOKEN;
  const composioUserId = process.env.COMPOSIO_USER_ID;

  console.log('[GeospatialTool] Environment check:', {
    composioApiKey: composioApiKey ? `${composioApiKey.substring(0, 8)}...` : 'MISSING',
    mapboxAccessToken: mapboxAccessToken ? `${mapboxAccessToken.substring(0, 8)}...` : 'MISSING',
    composioUserId: composioUserId ? `${composioUserId.substring(0, 8)}...` : 'MISSING',
  });

  if (!composioApiKey || !mapboxAccessToken || !composioUserId || !composioApiKey.trim() || !mapboxAccessToken.trim() || !composioUserId.trim()) {
    console.error('[GeospatialTool] Missing or empty required environment variables');
    return null;
  }

  // Load config from file or fallback
  let config;
  try {
    // Use static import for config
    let mapboxMcpConfig;
    try {
      mapboxMcpConfig = require('../../../mapbox_mcp_config.json');
      config = { ...mapboxMcpConfig, mapboxAccessToken };
      console.log('[GeospatialTool] Config loaded successfully');
    } catch (configError: any) {
      throw configError;
    }
  } catch (configError: any) {
    console.error('[GeospatialTool] Failed to load mapbox config:', configError.message);
    config = { mapboxAccessToken, version: '1.0.0', name: 'mapbox-mcp-server' };
    console.log('[GeospatialTool] Using fallback config');
  }

  // Build Composio MCP server URL
  // Note: This should be migrated to use Composio SDK directly instead of MCP client
  // For now, constructing URL directly without Smithery SDK
  let serverUrlToUse: URL;
  try {
    // Construct URL with Composio credentials
    const baseUrl = 'https://api.composio.dev/v1/mcp/mapbox';
    serverUrlToUse = new URL(baseUrl);
    serverUrlToUse.searchParams.set('api_key', composioApiKey);
    serverUrlToUse.searchParams.set('user_id', composioUserId);

    const urlDisplay = serverUrlToUse.toString().split('?')[0];
    console.log('[GeospatialTool] Composio MCP Server URL created:', urlDisplay);

    if (!serverUrlToUse.href || !serverUrlToUse.href.startsWith('https://')) {
      throw new Error('Invalid server URL generated');
    }
  } catch (urlError: any) {
    console.error('[GeospatialTool] Error creating Composio URL:', urlError.message);
    return null;
  }

  // Create transport
  let transport;
  try {
    transport = new StreamableHTTPClientTransport(serverUrlToUse);
    console.log('[GeospatialTool] Transport created successfully');
  } catch (transportError: any) {
    console.error('[GeospatialTool] Failed to create transport:', transportError.message);
    return null;
  }

  // Create client
  let client;
  try {
    client = new MCPClientClass({ name: 'GeospatialToolClient', version: '1.0.0' });
    console.log('[GeospatialTool] MCP Client instance created');
  } catch (clientError: any) {
    console.error('[GeospatialTool] Failed to create MCP client:', clientError.message);
    return null;
  }

  // Connect to server
  try {
    console.log('[GeospatialTool] Attempting to connect to MCP server...');
    await Promise.race([
      client.connect(transport),
      new Promise((_, reject) => setTimeout(() => reject(new Error('Connection timeout after 15 seconds')), 15000)),
    ]);
    console.log('[GeospatialTool] Successfully connected to MCP server');
  } catch (connectError: any) {
    console.error('[GeospatialTool] MCP connection failed:', connectError.message);
    // Dynamic imports to avoid Webpack issues with MCP SDK in production
    const { Client } = await import('@modelcontextprotocol/sdk/client/index.js');
    const { StreamableHTTPClientTransport } = await import('@modelcontextprotocol/sdk/client/streamableHttp.js');

    const authConfigId = process.env.COMPOSIO_MAPBOX_AUTH_CONFIG_ID || 'mapbox';
    const baseUrl = 'https://backend.composio.dev/mcp/client/streamable';
    const url = `${baseUrl}?userId=${composioUserId}&authConfigId=${authConfigId}&mapboxApiKey=${mapboxAccessToken}&composioApiKey=${composioApiKey}`;

    const transport = new StreamableHTTPClientTransport(new URL(url));
    const client = new Client(
      { name: 'mapbox-mcp-client', version: '1.0.0' },
      { capabilities: {} }
    );

    await client.connect(transport);
    return client;
  } catch (error) {
    console.error('[GeospatialTool] Failed to connect to MCP server:', error);
    return null;
  }

  // List tools
  try {
    const tools = await client.listTools();
    console.log('[GeospatialTool] Available tools:', tools.tools?.map(t => t.name) || []);
  } catch (listError: any) {
    console.warn('[GeospatialTool] Could not list tools:', listError.message);
  }

  return client;
}
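Both the connect and close paths in this file inline a `Promise.race` against a `setTimeout`; the same pattern can be factored into one helper (a sketch only — `withTimeout` is not part of the PR):

```typescript
// Hypothetical timeout wrapper for the Promise.race pattern used around
// client.connect(...) and client.close(...): rejects after `ms` and clears
// the timer once the underlying promise settles.
async function withTimeout<T>(promise: Promise<T>, ms: number, label: string): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`${label} timeout after ${ms} ms`)), ms);
  });
  try {
    return await Promise.race([promise, timeout]);
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```

The connect call would then read `await withTimeout(client.connect(transport), 15000, 'Connection')`.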
/**
 * Safely close the MCP client with timeout.
 */
async function closeClient(client: McpClient | null) {
  if (!client) return;
  try {
    await Promise.race([
      client.close(),
      new Promise((_, reject) => setTimeout(() => reject(new Error('Close timeout after 5 seconds')), 5000)),
    ]);
    console.log('[GeospatialTool] MCP client closed successfully');
  } catch (error: any) {
    console.error('[GeospatialTool] Error closing MCP client:', error.message);
async function closeClient(client: any) {
  if (client) {
    try {
      await client.close();
    } catch (error) {
      console.warn('[GeospatialTool] Error closing client:', error);
    }
  }
}

/**
 * Helper to generate a Google Static Map URL
 */
function getGoogleStaticMapUrl(latitude: number, longitude: number): string {
  const apiKey = process.env.NEXT_PUBLIC_GOOGLE_MAPS_API_KEY || process.env.GOOGLE_MAPS_API_KEY;
  if (!apiKey) return '';
  return `https://maps.googleapis.com/maps/api/staticmap?center=${latitude},${longitude}&zoom=15&size=640x480&scale=2&markers=color:red%7C${latitude},${longitude}&key=${apiKey}`;
}
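`getGoogleStaticMapUrl` above interpolates query parameters and hand-encodes the marker pipe as `%7C`; here is a sketch of the same URL built with `URLSearchParams`, which handles that encoding automatically (`buildStaticMapUrl` is a hypothetical variant, not the PR's helper):

```typescript
// Hypothetical variant of getGoogleStaticMapUrl using URLSearchParams so
// the '|' in the markers parameter is percent-encoded automatically.
function buildStaticMapUrl(latitude: number, longitude: number, apiKey: string): string {
  if (!apiKey) return '';
  const params = new URLSearchParams({
    center: `${latitude},${longitude}`,
    zoom: '15',
    size: '640x480',
    scale: '2',
    markers: `color:red|${latitude},${longitude}`,
    key: apiKey,
  });
  return `https://maps.googleapis.com/maps/api/staticmap?${params.toString()}`;
}
```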
/**
 * Main geospatial tool executor.
 */
export const geospatialTool = ({
  uiStream,
  mapProvider
}: {
  uiStream: ReturnType<typeof createStreamableUI>
  mapProvider?: MapProvider
}) => ({
  description: `Use this tool for location-based queries including:
There a plethora of tools inside this tool accessible on the mapbox mcp server where switch case into the tool of choice for that use case
If the Query is supposed to use multiple tools in a sequence you must access all the tools in the sequence and then provide a final answer based on the results of all the tools used.

Static image tool:

Generates static map images using the Mapbox static image API. Features include:

Custom map styles (streets, outdoors, satellite, etc.)
Adjustable image dimensions and zoom levels
Support for multiple markers with custom colors and labels
Overlay options including polylines and polygons
Auto-fitting to specified coordinates

Category search tool:

Performs a category search using the Mapbox Search Box category search API. Features include:
Search for points of interest by category (restaurants, hotels, gas stations, etc.)
Filtering by geographic proximity
Customizable result limits
Rich metadata for each result
Support for multiple languages

Reverse geocoding tool:

Performs reverse geocoding using the Mapbox geocoding V6 API. Features include:
Convert geographic coordinates to human-readable addresses
Customizable levels of detail (street, neighborhood, city, etc.)
Results filtering by type (address, poi, neighborhood, etc.)
Support for multiple languages
Rich location context information

Directions tool:

Fetches routing directions using the Mapbox Directions API. Features include:

Support for different routing profiles: driving (with live traffic or typical), walking, and cycling
Route from multiple waypoints (2-25 coordinate pairs)
Alternative routes option
Route annotations (distance, duration, speed, congestion)

Scheduling options:

Future departure time (depart_at) for driving and driving-traffic profiles
Desired arrival time (arrive_by) for driving profile only
Profile-specific optimizations:
Driving: vehicle dimension constraints (height, width, weight)
Exclusion options for routing:
Common exclusions: ferry routes, cash-only tolls
Driving-specific exclusions: tolls, motorways, unpaved roads, tunnels, country borders, state borders
Custom point exclusions (up to 50 geographic points to avoid)
GeoJSON geometry output format

Isochrone tool:

Computes areas that are reachable within a specified amount of times from a location using Mapbox Isochrone API. Features include:

Support for different travel profiles (driving, walking, cycling)
Customizable travel times or distances
Multiple contour generation (e.g., 15, 30, 45 minute ranges)
Optional departure or arrival time specification
Color customization for visualization

Search and geocode tool:
Uses the Mapbox Search Box Text Search API endpoint to power searching for and geocoding POIs, addresses, places, and any other types supported by that API. This tool consolidates the functionality that was previously provided by the ForwardGeocodeTool and PoiSearchTool (from earlier versions of this MCP server) into a single tool.`,
| export const geospatialTool = ({ uiStream, mapProvider }: { uiStream: any, mapProvider?: MapProvider }) => ({ |
🧹 Nitpick | 🔵 Trivial
Prefer typed MCP client and uiStream over any.
This will improve safety without changing runtime behavior. Please verify the MCP SDK’s type entrypoint for the type-only import.
♻️ Suggested typing update
import { createStreamableUI, createStreamableValue } from 'ai/rsc';
+import type { Client } from '@modelcontextprotocol/sdk/client';
-async function getConnectedMcpClient(): Promise<any | null> {
+async function getConnectedMcpClient(): Promise<Client | null> {
// ...
}
-async function closeClient(client: any) {
+async function closeClient(client: Client | null) {
if (client) {
try {
await client.close();
} catch (error) {
console.warn('[GeospatialTool] Error closing client:', error);
}
}
}
-export const geospatialTool = ({ uiStream, mapProvider }: { uiStream: any, mapProvider?: MapProvider }) => ({
+export const geospatialTool = ({ uiStream, mapProvider }: { uiStream: ReturnType<typeof createStreamableUI>, mapProvider?: MapProvider }) => ({

🤖 Prompt for AI Agents
In `@lib/agents/tools/geospatial.tsx` around lines 23 - 66, Change the loose any
types to the MCP client and uiStream concrete types: import the Client and
StreamableHTTPClientTransport types (as type-only imports if needed) from the
MCP SDK and update getConnectedMcpClient to return Promise<Client | null> (and
use Client instead of any for the local client variable), change closeClient
signature to accept client: Client | null, and update geospatialTool's parameter
type from { uiStream: any, mapProvider?: MapProvider } to use the proper
uiStream type (and include the MapProvider type if missing); verify the MCP SDK
type entrypoint (e.g. the SDK's index.d.ts) supports type-only imports so these
changes are type-only and do not alter runtime behavior.
export interface DrawnFeature {
  id: string;
  type: 'Polygon' | 'LineString';
  measurement: string;
  geometry: any;
}
🧹 Nitpick | 🔵 Trivial
Tighten geometry typing to avoid any leakage.
Consider using GeoJSON types for stronger contracts. Please verify the GeoJSON type package is available before adopting this.
♻️ Suggested type tightening
+import type { Geometry } from 'geojson';
+
export interface DrawnFeature {
id: string;
type: 'Polygon' | 'LineString';
measurement: string;
- geometry: any;
+ geometry: Geometry;
}

🤖 Prompt for AI Agents
In `@lib/types/geospatial.ts` around lines 1 - 6, The DrawnFeature interface uses
geometry: any which leaks unsafe typing; replace it with the appropriate GeoJSON
geometry union (e.g., GeoJSON.Geometry or specific types like GeoJSON.Polygon |
GeoJSON.LineString) by importing the GeoJSON types from the geojson package and
updating the geometry property on DrawnFeature; before changing, verify the
geojson package is installed and available in package.json/lock and add it if
missing, then update imports/usages referencing DrawnFeature to match the
tightened type.
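A minimal sketch of the tightened interface, writing the geometry union out locally so the snippet stands alone (the real fix would import `Geometry` from the `geojson` types package, as the review suggests):

```typescript
// Local stand-ins for the GeoJSON geometry types the review suggests
// importing; only the two variants DrawnFeature can hold are modeled.
type Position = [number, number];
interface PolygonGeometry { type: 'Polygon'; coordinates: Position[][] }
interface LineStringGeometry { type: 'LineString'; coordinates: Position[] }

interface DrawnFeature {
  id: string;
  type: 'Polygon' | 'LineString';
  measurement: string;
  geometry: PolygonGeometry | LineStringGeometry; // was: any
}

const feature: DrawnFeature = {
  id: 'f1',
  type: 'LineString',
  measurement: '120 m',
  geometry: { type: 'LineString', coordinates: [[0, 0], [1, 1]] },
};
```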
if (process.env.SPECIFIC_API_MODEL) {
  const provider = process.env.SPECIFIC_API_MODEL.split(':')[0];
  const modelId = process.env.SPECIFIC_API_MODEL.split(':').slice(1).join(':');

  if (provider === 'openai') {
    const { createOpenAI } = await import('@ai-sdk/openai');
    return createOpenAI({ apiKey: process.env.OPENAI_API_KEY })(modelId);
  } else if (provider === 'google') {
    const { createGoogleGenerativeAI } = await import('@ai-sdk/google');
    return createGoogleGenerativeAI({ apiKey: process.env.GEMINI_3_PRO_API_KEY })(modelId);
  } else if (provider === 'xai') {
    const { createXai } = await import('@ai-sdk/xai');
    return createXai({ apiKey: process.env.XAI_API_KEY })(modelId);
  }
🧩 Analysis chain
🏁 Script executed:
# First, locate and examine the file
find . -name "ai-model.ts" -type f

Repository: QueueLab/QCX
Length of output: 79
🏁 Script executed:
cat -n ./lib/utils/ai-model.ts

Repository: QueueLab/QCX
Length of output: 4490
Add API key validation for the SPECIFIC_API_MODEL override path.
Lines 11–19 instantiate providers without validating required API keys. Unlike the fallback paths below (lines 33–110), this creates a consistency issue and allows opaque SDK auth failures. Add explicit checks before provider instantiation.
🛡️ Suggested guardrails for override path
if (process.env.SPECIFIC_API_MODEL) {
const provider = process.env.SPECIFIC_API_MODEL.split(':')[0];
const modelId = process.env.SPECIFIC_API_MODEL.split(':').slice(1).join(':');
if (provider === 'openai') {
+ if (!process.env.OPENAI_API_KEY) {
+ throw new Error('SPECIFIC_API_MODEL=openai requires OPENAI_API_KEY');
+ }
const { createOpenAI } = await import('@ai-sdk/openai');
return createOpenAI({ apiKey: process.env.OPENAI_API_KEY })(modelId);
} else if (provider === 'google') {
+ if (!process.env.GEMINI_3_PRO_API_KEY) {
+ throw new Error('SPECIFIC_API_MODEL=google requires GEMINI_3_PRO_API_KEY');
+ }
const { createGoogleGenerativeAI } = await import('@ai-sdk/google');
return createGoogleGenerativeAI({ apiKey: process.env.GEMINI_3_PRO_API_KEY })(modelId);
} else if (provider === 'xai') {
+ if (!process.env.XAI_API_KEY) {
+ throw new Error('SPECIFIC_API_MODEL=xai requires XAI_API_KEY');
+ }
const { createXai } = await import('@ai-sdk/xai');
return createXai({ apiKey: process.env.XAI_API_KEY })(modelId);
}
}

🤖 Prompt for AI Agents
In `@lib/utils/ai-model.ts` around lines 7 - 20, The SPECIFIC_API_MODEL override
path currently constructs provider clients without validating API keys; before
calling createOpenAI/createGoogleGenerativeAI/createXai, add explicit checks
that the corresponding env vars (OPENAI_API_KEY for provider 'openai',
GEMINI_3_PRO_API_KEY for 'google', XAI_API_KEY for 'xai') are present and
non-empty, and if missing throw or return a clear error (or log and exit)
indicating the missing key and referencing SPECIFIC_API_MODEL so auth failures
are descriptive.
if (awsAccessKeyId && awsSecretAccessKey) {
  const { createAmazonBedrock } = await import('@ai-sdk/amazon-bedrock');
  const bedrock = createAmazonBedrock({
    bedrockOptions: {
      region: awsRegion,
      credentials: {
        accessKeyId: awsAccessKeyId,
        secretAccessKey: awsSecretAccessKey,
      },
    },
  });
  return bedrock(bedrockModelId);
🧩 Analysis chain
🏁 Script executed:
# First, let's locate and read the ai-model.ts file
fd -t f ai-model.ts

Repository: QueueLab/QCX
Length of output: 77
🏁 Script executed:
# Read the ai-model.ts file with line numbers
cat -n lib/utils/ai-model.ts

Repository: QueueLab/QCX
Length of output: 4490
🏁 Script executed:
# Check if there's any error handling around Bedrock initialization or other Bedrock usage in the codebase
rg -i bedrock -A 5 -B 5

Repository: QueueLab/QCX
Length of output: 10664
Add AWS region check to Bedrock initialization guard.
Lines 92–103 check AWS credentials but not AWS_REGION. Unlike xAI and Gemini providers which have error handling, Bedrock lacks both a region guard and try-catch. If AWS credentials are set without a region, initialization will fail without fallback. Add awsRegion to the condition:
- if (awsAccessKeyId && awsSecretAccessKey) {
+ if (awsAccessKeyId && awsSecretAccessKey && awsRegion) {
const { createAmazonBedrock } = await import('@ai-sdk/amazon-bedrock');
const bedrock = createAmazonBedrock({
bedrockOptions: {
region: awsRegion,
credentials: {
accessKeyId: awsAccessKeyId,
secretAccessKey: awsSecretAccessKey,
},
},
});
return bedrock(bedrockModelId);
}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-if (awsAccessKeyId && awsSecretAccessKey) {
+if (awsAccessKeyId && awsSecretAccessKey && awsRegion) {
   const { createAmazonBedrock } = await import('@ai-sdk/amazon-bedrock');
   const bedrock = createAmazonBedrock({
     bedrockOptions: {
       region: awsRegion,
       credentials: {
         accessKeyId: awsAccessKeyId,
         secretAccessKey: awsSecretAccessKey,
       },
     },
   });
   return bedrock(bedrockModelId);
+}
🤖 Prompt for AI Agents
In `@lib/utils/ai-model.ts` around lines 92 - 103, The Bedrock initialization
currently checks only awsAccessKeyId and awsSecretAccessKey; update the guard to
require awsRegion as well and wrap the import/creation in a try-catch so it
gracefully falls back on failure. Specifically, modify the conditional that
gates createAmazonBedrock(...) to include awsRegion and add error handling
around the dynamic import and bedrock(bedrockModelId) call (references:
createAmazonBedrock, bedrock, bedrockModelId, awsAccessKeyId,
awsSecretAccessKey, awsRegion) so failures are logged/handled and do not crash
when region is missing or initialization fails.
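The guard-plus-fallback that prompt describes can be sketched with the provider factory injected, so the control flow is visible without AWS; `initBedrockOrNull` and the `create` callback are hypothetical names, not the PR's code:

```typescript
// Hypothetical sketch: require all three AWS settings and fall back to
// null (instead of crashing) if provider creation throws.
interface AwsEnv {
  awsAccessKeyId?: string;
  awsSecretAccessKey?: string;
  awsRegion?: string;
}

function initBedrockOrNull<T>(env: AwsEnv, create: (region: string) => T): T | null {
  const { awsAccessKeyId, awsSecretAccessKey, awsRegion } = env;
  if (!awsAccessKeyId || !awsSecretAccessKey || !awsRegion) {
    return null; // region is required alongside the credentials
  }
  try {
    return create(awsRegion);
  } catch (error) {
    console.error('[ai-model] Bedrock init failed, falling back:', error);
    return null;
  }
}
```

In the real `getModel()`, returning `null` here would let the function continue to the next provider branch instead of throwing.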
User description
This change addresses the "TypeError: Cannot read properties of undefined (reading 'call')" error in the Vercel runtime logs for the Resolution Search feature.
Key changes:
- Refactored `lib/utils/index.ts` to separate client-side utilities (like `cn`) from server-side AI model initialization (`getModel`).
- Removed `QCX` from `transpilePackages` in `next.config.mjs`.
- Optimized imports in `app/actions.tsx`.
- Preserved `getModel` vision support.

PR created automatically by Jules for task 7163342244925719940 started by @ngoiyaeric
PR Type
Bug fix, Enhancement
Description
- Separated server-side AI model logic from client utilities
- Created dedicated `lib/utils/ai-model.ts` for model initialization
- Updated all imports across agents to use new module path
- Removed `QCX` from Next.js `transpilePackages` configuration
- Optimized imports in `app/actions.tsx` with explicit agent imports

Diagram Walkthrough
File Walkthrough
10 files
- Remove server-side AI model code
- Update import path to new AI model module (×8)
- Optimize imports with explicit agent paths

1 file
- New dedicated AI model initialization module

1 file
- Remove `QCX` from `transpilePackages` configuration

Summary by CodeRabbit
New Features
Refactor